Abstract:
Many wireless sensor network (WSN) applications, e.g., structural health monitoring (SHM), require the sensors to form a multihop network to collect environmental data in real time. These sensors generally generate sensing data at fixed rates, so their transmission schedules can be listed deterministically. Time division multiple access (TDMA) is especially appropriate for such applications because it prevents radio interference, thereby reducing transmission power and maximizing wireless spectrum reuse. However, reserving sufficient bandwidth on the distinct links of a heterogeneous WSN requires a complex TDMA schedule, and a sensor node may need to keep a large TDMA schedule table in its tiny memory. To avoid a large TDMA schedule table, this paper proposes the CyclicMAC scheduler, which assigns each node a temporal transmission pattern parameterized only by a period and a phase. The CyclicMAC scheduler determines the period to satisfy the bandwidth requirement of the node, and adjusts the phase both to achieve collision-freeness and to reduce the end-to-end latency. The end-to-end latency of the resulting schedule is proven to be optimal if each wireless link interferes only with its parent link and sibling links. To the best of our knowledge, CyclicMAC is the first scheduler that simultaneously addresses the three design issues of TDMA scheduling for multihop wireless sensor networks: satisfying heterogeneous bandwidth requirements, minimizing schedule table size, and reducing end-to-end latency.
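As a rough illustration of the (period, phase) representation described above (not the paper's actual scheduling algorithm), a node that transmits in every slot t with t mod period == phase forms a cyclic pattern, and whether two such patterns ever share a slot follows from elementary number theory:

```python
from math import gcd

def collides(p1: int, phi1: int, p2: int, phi2: int) -> bool:
    """Check whether two cyclic transmission patterns ever share a slot.

    Pattern i occupies the slots {t : t mod p_i == phi_i}. By the
    Chinese remainder theorem, the two slot sets intersect if and
    only if phi1 == phi2 (mod gcd(p1, p2)).
    """
    return (phi1 - phi2) % gcd(p1, p2) == 0

# Periods 4 and 6 with phases 0 and 2: both patterns hit slot 8.
assert collides(4, 0, 6, 2)
# Phases 1 and 2 with the same periods: odd slots vs. even slots.
assert not collides(4, 1, 6, 2)
```

Under this model, a scheduler in the spirit of CyclicMAC would pick each node's period from its bandwidth demand and then search for phases such that no two interfering nodes' patterns collide.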
Abstract:
Multicast communication is widely used by streaming video applications to reduce both server load and network bandwidth. However, receivers in a multicast group must access the multicast stream simultaneously, and this restriction to synchronous access diminishes the benefit of multicast because users of a video-on-demand service usually issue requests asynchronously, i.e., at any time. In this paper, we not only formulate this streaming problem but also propose a new multicast infrastructure, called buffer-assisted on-demand multicast, that allows receivers to access a multicast stream asynchronously. A timing control mechanism is integrated into intermediate routing nodes (e.g., routers, proxies, or peer nodes in a peer-to-peer network) to branch time-variant multicast sub-streams to the corresponding receivers. In addition, an optimal routing path and the corresponding buffer allocations for each request must be carefully determined to maximize the throughput of the multicast stream. We prove that this routing problem over general graph networks is NP-complete, and then propose a routing algorithm for overlay networks that minimizes server load. Simulation results demonstrate that buffer-assisted on-demand multicast outperforms many popular streaming methods.
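The core idea of letting an intermediate node absorb a receiver's arrival lag can be sketched as follows; the class name and interface below are hypothetical, chosen only to illustrate how a bounded per-node buffer decides which late joiners can still be served from the ongoing multicast:

```python
from collections import deque

class BufferedNode:
    """Toy model of an intermediate node in buffer-assisted multicast.

    The node retains the most recent `capacity` stream segments, so a
    receiver that arrives late can start playback from an earlier
    segment as long as that segment is still buffered.
    """
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.segments = deque()  # (segment_index, payload)

    def receive(self, index: int, payload: bytes) -> None:
        # Append the newest segment and evict the oldest if full.
        self.segments.append((index, payload))
        if len(self.segments) > self.capacity:
            self.segments.popleft()

    def can_serve_from(self, index: int) -> bool:
        # A late receiver asking to start at `index` can be served
        # only while that segment remains in the buffer.
        return any(i == index for i, _ in self.segments)
```

A larger buffer therefore tolerates a larger request-arrival gap, which is why the routing path and buffer allocation per request jointly determine how many receivers one multicast stream can cover.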
Abstract:
Proxy-caching strategies, especially prefix caching and interval caching, are commonly used in video-on-demand (VOD) systems to improve both system performance and the playback experience of users. However, because these caching strategies are designed for homogeneous clients, they do not perform well in the real world, where clients are heterogeneous (i.e., they have different available network bandwidths and different client-side buffer sizes). This paper investigates the problems caused by heterogeneous client-side buffers. We analyze the theoretical performance of these caching strategies and then derive cost functions to measure the corresponding performance gains. Based on these analytical results, we develop a caching strategy that employs both prefix caching and interval caching to minimize the input bandwidth of a proxy. The simulation results demonstrate that the bandwidth requirements of a proxy implementing our caching strategy are significantly lower than those of a proxy adopting prefix caching or interval caching alone.
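To make the bandwidth-saving intuition of interval caching concrete, here is a minimal sketch, assuming a simple greedy policy (smallest intervals first) rather than the cost functions derived in the paper; the function name and parameters are illustrative only:

```python
def interval_cache_savings(arrival_gaps, bitrate, cache_budget):
    """Estimate upstream streams saved by interval caching.

    Each gap (seconds) between two consecutive requests for the same
    video costs gap * bitrate bytes of proxy cache to bridge; every
    bridged interval lets the later request read from the cache
    instead of opening a new upstream stream. Caching the smallest
    intervals first maximizes the number of bridged intervals.
    """
    saved, used = 0, 0
    for gap in sorted(arrival_gaps):
        cost = gap * bitrate
        if used + cost <= cache_budget:
            used += cost
            saved += 1
    return saved
```

With heterogeneous client buffers, part of each interval can instead be absorbed at the client side, which is one reason a combined prefix/interval strategy can beat either strategy alone.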
Abstract:
By forwarding the server stream from client to client, chaining-based schemes are an effective way to reduce the number of server streams for streaming applications in well-connected networks. In this paper, we prove that the minimum number of server streams required by such schemes is n-k+1, where n is the number of client requests and k is a value determined by the client buffer sizes and the distribution of requests. In addition, we present an optimal chaining algorithm that uses a dynamic buffer allocation strategy. Compared to existing chaining schemes, our scheme not only utilizes the backward (basic chaining) and/or forward (adaptive chaining) buffer, but also exploits the buffers of other clients to extend a chain as far as possible. In this way, more clients can be chained together and served by the same server stream. Our simulation results show that the server stream requirements of the presented scheme are much lower than those of existing chaining schemes. We also introduce mechanisms for handling VCR functions and fault exceptions in practical applications.
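The chaining idea can be sketched with a toy model of basic chaining, assuming each client can buffer at most `buffer_secs` of video for its successor; this is a simplification, not the paper's optimal algorithm with dynamic buffer allocation:

```python
def server_streams_basic_chaining(arrivals, buffer_secs):
    """Count server streams under a toy basic-chaining model.

    Requests are sorted by arrival time. Request i joins the chain
    of request i-1 when the inter-arrival gap fits within one client
    buffer (the predecessor can forward the stream from its buffer);
    otherwise it starts a new chain, requiring a fresh server stream.
    """
    arrivals = sorted(arrivals)
    streams = 1  # the first request always opens a server stream
    for prev, cur in zip(arrivals, arrivals[1:]):
        if cur - prev > buffer_secs:
            streams += 1  # gap too large to bridge: new chain head
    return streams
```

In this toy model each chain head costs one server stream, so pooling the buffers of other clients, as the proposed scheme does, bridges larger gaps and pushes the stream count toward the n-k+1 lower bound.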